
Search for: All records

Creators/Authors contains: "Hancock, Jeffrey T."

  1. Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems suggest words, complete sentences, or produce entire conversations. AI-generated language is often not identified as such but presented as language written by humans, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI. In six experiments, participants (N = 4,600) were unable to detect self-presentations generated by state-of-the-art AI language models in professional, hospitality, and dating contexts. A computational analysis of language features shows that human judgments of AI-generated language are hindered by intuitive but flawed heuristics, such as associating first-person pronouns, contractions, or family topics with human-written language. We experimentally demonstrate that these heuristics make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce text perceived as "more human than human." We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI, limiting the subversion of human intuition.
  2. Artificial Intelligence (AI) is a transformative force in communication and messaging strategy, with the potential to disrupt traditional approaches. Large language models (LLMs), a form of AI, are capable of generating high-quality, humanlike text. We investigate the persuasive quality of AI-generated messages to understand how AI could impact public health messaging. Specifically, through a series of studies designed to characterize and evaluate generative AI in developing public health messages, we analyze COVID-19 pro-vaccination messages generated by GPT-3, a state-of-the-art instantiation of a large language model. Study 1 is a systematic evaluation of GPT-3's ability to generate pro-vaccination messages. Study 2 then observed people's perceptions of curated GPT-3-generated messages compared to human-authored messages released by the CDC (Centers for Disease Control and Prevention), finding that GPT-3 messages were perceived as more effective and as stronger arguments, and evoked more positive attitudes, than CDC messages. Finally, Study 3 assessed the role of source labels on perceived quality, finding that while participants preferred AI-generated messages, they expressed dispreference for messages that were labeled as AI-generated. The results suggest that, with human supervision, AI can be used to create effective public health messages, but that individuals prefer their public health messages to come from human institutions rather than AI sources. We propose best practices for assessing generative outputs of large language models in future social science research and ways health professionals can use AI systems to augment public health messaging.

  3. This paper examines strategies for making misinformation interventions responsive to four communities of color. Using qualitative focus groups with members of four non-profit organizations, we worked with community leaders to identify misinformation narratives, sources of exposure, and effective intervention strategies in the Asian American Pacific Islander (AAPI), Black, Latino, and Native American communities. Analyzing the findings from those focus groups, we identified several pathways through which misinformation prevention efforts can become more equitable and effective. Building on our findings, we propose steps that practitioners, academics, and policymakers can take to better address the misinformation crisis within communities of color. We illustrate how these recommendations can be put into practice through examples from workshops co-designed with a non-profit working on disinformation and media literacy.
  4. Abstract

    As the metaverse expands, understanding how people use virtual reality to learn and connect is increasingly important. We used the Transformed Social Interaction paradigm (Bailenson et al., 2004) to examine different avatar identities and environments over time. In Study 1 (n = 81), entitativity, presence, enjoyment, and realism increased over 8 weeks. Avatars that resembled participants increased synchrony (similarities in moment-to-moment nonverbal behaviors between participants). Moreover, self-avatars increased self-presence and realism, but decreased enjoyment, compared to uniform avatars. In Study 2 (n = 137), participants cycled through 192 unique virtual environments. As visible space increased, so did nonverbal synchrony, perceived restorativeness, entitativity, pleasure, arousal, self- and spatial presence, enjoyment, and realism. Outdoor environments increased perceived restorativeness and enjoyment more than indoor environments. Self-presence and realism increased over time in both studies. We discuss the implications of avatar appearance and environmental context for social behavior in classroom contexts over time.

  8. Abstract We define Artificial Intelligence-Mediated Communication (AI-MC) as interpersonal communication in which an intelligent agent operates on behalf of a communicator by modifying, augmenting, or generating messages to accomplish communication goals. The recent advent of AI-MC raises new questions about how technology may shape human communication and requires re-evaluation, and potentially expansion, of many of Computer-Mediated Communication's (CMC) key theories, frameworks, and findings. A research agenda around AI-MC should consider the design of these technologies and the psychological, linguistic, relational, policy, and ethical implications of introducing AI into human-human communication. This article aims to articulate such an agenda.